Phased Exploration with Greedy Exploitation in Stochastic Combinatorial Partial Monitoring Games

Authors

  • Sougata Chaudhuri
  • Ambuj Tewari
Abstract

Partial monitoring games are repeated games where the learner receives feedback that might be different from the adversary's move or even the reward gained by the learner. Recently, a general model of combinatorial partial monitoring (CPM) games was proposed [1], where the learner's action space can be exponentially large and the adversary samples its moves from a bounded, continuous space, according to a fixed distribution. That paper gave a confidence-bound-based algorithm (GCB) that achieves O(T^{2/3} log T) distribution-independent and O(log T) distribution-dependent regret bounds. The implementation of their algorithm depends on two separate offline oracles, and the distribution-dependent regret additionally requires the existence of a unique optimal action for the learner. Adopting their CPM model, our first contribution is a Phased Exploration with Greedy Exploitation (PEGE) algorithmic framework for the problem. Different algorithms within the framework achieve O(T^{2/3} √(log T)) distribution-independent and O(log T) distribution-dependent regret, respectively. Crucially, our framework needs only the simpler "argmax" oracle from GCB, and the distribution-dependent regret does not require the existence of a unique optimal action. Our second contribution is another algorithm, PEGE2, which combines gap estimation with a PEGE algorithm to achieve an O(log T) regret bound, matching the GCB guarantee but removing the dependence on the size of the learner's action space. However, like GCB, PEGE2 requires access to both offline oracles and the existence of a unique optimal action. Finally, we discuss how our algorithms can be efficiently applied to a CPM problem of practical interest: namely, online ranking with feedback at the top.
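To make the phased structure described above concrete, the following minimal sketch illustrates the alternation between exploration batches and greedy exploitation batches. It is an illustration, not the authors' exact pseudocode: explore_actions (a fixed set of actions whose feedback identifies the unknown parameter), observe, estimate_theta, argmax_oracle, and the phase-length schedules are all assumed placeholders.

```python
# Illustrative sketch of a Phased Exploration with Greedy Exploitation (PEGE)
# loop. All primitives below are placeholders chosen for this example.

def pege(T, explore_actions, observe, estimate_theta, argmax_oracle,
         explore_len=lambda b: 1, exploit_len=lambda b: b):
    """Alternate exploration batches with greedy exploitation batches.

    In batch b, each exploration action is played explore_len(b) times to
    gather feedback about the unknown parameter; the parameter is then
    re-estimated and the offline "argmax" oracle returns a greedy action,
    which is played for exploit_len(b) rounds. Growing exploitation phases
    are one way to obtain the logarithmic behaviour cited in the abstract.
    """
    t, b, history = 0, 1, []
    while t < T:
        # Exploration phase: play every action in the exploration set.
        for _ in range(explore_len(b)):
            for a in explore_actions:
                if t >= T:
                    return
                history.append((a, observe(a)))  # record feedback, not reward
                t += 1
        # Greedy exploitation phase: commit to the estimated-best action.
        theta_hat = estimate_theta(history)
        greedy_action = argmax_oracle(theta_hat)
        for _ in range(exploit_len(b)):
            if t >= T:
                return
            observe(greedy_action)
            t += 1
        b += 1
```

Note that only the "argmax" oracle is invoked here, which reflects the claim in the abstract that the PEGE framework does not need the second offline oracle used by GCB.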


Similar articles

Stochastic Online Greedy Learning with Semi-bandit Feedbacks

The greedy algorithm has been extensively studied in the field of combinatorial optimization for decades. In this paper, we address the online learning problem in which the input to the greedy algorithm is stochastic, with unknown parameters that have to be learned over time. We first propose the greedy regret and ε-quasi greedy regret as learning metrics, comparing with the performance of the offline greedy al...
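As a rough illustration of the setting this summary describes (stochastic inputs to a greedy procedure, with feedback only on the chosen elements), the sketch below uses a simplified modular-reward case with UCB-style estimates. The names elements, budget, and pull are assumed placeholders; this is not that paper's actual algorithm.

```python
# Hedged sketch of online greedy selection with semi-bandit feedback:
# each round picks elements greedily from optimistic value estimates and
# observes a reward only for the elements actually chosen.
from collections import defaultdict
import math

def online_greedy(T, elements, budget, pull):
    """pull(e) is an assumed environment call returning the stochastic
    reward of element e; all names here are illustrative."""
    counts = defaultdict(int)
    means = defaultdict(float)
    for t in range(1, T + 1):
        def index(e):  # optimistic (UCB-style) estimate of an element's value
            if counts[e] == 0:
                return float("inf")
            return means[e] + math.sqrt(2 * math.log(t) / counts[e])
        # Greedy construction: pick the `budget` elements with highest index.
        chosen = sorted(elements, key=index, reverse=True)[:budget]
        # Semi-bandit feedback: observe a reward for each chosen element only.
        for e in chosen:
            r = pull(e)
            counts[e] += 1
            means[e] += (r - means[e]) / counts[e]
```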


A hybrid metaheuristic using fuzzy greedy search operator for combinatorial optimization with specific reference to the travelling salesman problem

We describe a hybrid metaheuristic algorithm for combinatorial optimization problems with specific reference to the travelling salesman problem (TSP). The method is a combination of a genetic algorithm (GA) and a greedy randomized adaptive search procedure (GRASP). A new adaptive fuzzy greedy search operator is developed for this hybrid method. Computational experiments using a wide range of...
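For readers unfamiliar with the GRASP component mentioned above, the sketch below shows only a standard greedy randomized tour construction for the TSP; it does not reproduce the paper's fuzzy greedy operator or its GA hybridisation, and dist and alpha are illustrative parameters.

```python
# Illustrative GRASP-style construction phase for the TSP: repeatedly pick a
# random city from a restricted candidate list (RCL) of near-greedy options.
import random

def grasp_tour(dist, alpha=0.3, start=0):
    """dist[i][j] is a distance matrix; alpha in [0, 1] trades off greediness
    (alpha = 0) against pure randomness (alpha = 1)."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        costs = {c: dist[last][c] for c in unvisited}
        lo, hi = min(costs.values()), max(costs.values())
        threshold = lo + alpha * (hi - lo)
        rcl = [c for c, d in costs.items() if d <= threshold]
        tour.append(random.choice(rcl))
        unvisited.remove(tour[-1])
    return tour
```

In a full GRASP, each constructed tour would typically be improved by a local search such as 2-opt before the next randomized restart.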


Best-First Width Search: Exploration and Exploitation in Classical Planning

It has been shown recently that the performance of greedy best-first search (GBFS) for computing plans that are not necessarily optimal can be improved by adding forms of exploration when reaching heuristic plateaus: from random walks to local GBFS searches. In this work, we address this problem using structural exploration methods resulting from the ideas of width-based search. Width-based...


A Near-Optimal Poly-Time Algorithm for Learning a class of Stochastic Games

We present a new algorithm for polynomial-time learning of near-optimal behavior in stochastic games. This algorithm incorporates and integrates important recent results of Kearns and Singh [1998] in reinforcement learning and of Monderer and Tennenholtz [1997] in repeated games. In stochastic games we face an exploration vs. exploitation dilemma more complex than in Markov decision processes....


Reactive Max-min Ant System: an Experimental Analysis of the Combination with K-opt Local Searches

Ant colony optimization (ACO) is a stochastic search method for solving NP-hard problems. The exploration versus exploitation dilemma arises in ACO search. The reactive max-min ant system algorithm is a recent proposal to automate exploration and exploitation. It memorizes the search regions in terms of reactive heuristics to be harnessed after restart, which is to avoid the arbitrary explora...




Journal:

Volume   Issue

Pages  -

Publication date: 2016